

Search for: All records where Creators/Authors contains: "Matusovich, HM"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be freely available during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Doctoral students from underrepresented groups in engineering tend to complete their degrees at lower rates than their White and international peers. Research indicates that early support in doctoral programs, rather than later remedial efforts, can lead to long-term success. To that end, we designed the Rising Doctoral Institute (RDI), an early information intervention for minoritized doctoral students. In this work in progress, we explore how this intervention supports doctoral student agency throughout the first year of the doctoral experience. We address the following question: How can agency be encouraged and promoted among minoritized students in the first year of the engineering PhD? Employing a longitudinal qualitative design, we conducted monthly interviews with six participants throughout their first year of doctoral study in engineering programs. We ground our work in Klemenčič’s Student Agency Model, focusing on experiences affecting persistence, to help uncover the different aspects of agency that can manifest during this period. Preliminary results reveal that students cultivate agency through self-regulation, self-direction, self-determination, and self-efficacy, evident in their planning, motivation, and community engagement. Future work will focus on uncovering the specific mechanisms through which agency is enhanced. By linking positive first-year experiences to agency development, this research can guide interventions and tools for engineering departments to support student persistence.
    Free, publicly-accessible full text available October 31, 2026
  2. In this full research paper, we discuss the benefits and challenges of using GPT-4 to perform qualitative analysis to identify faculty’s mental models of assessment. Assessments play an important role in engineering education: they are used to evaluate student learning, measure progress, and identify areas for improvement. However, how faculty members approach assessments can vary based on several factors, including their own mental models of assessment. To understand the variation in these mental models, we conducted interviews with faculty members in various engineering disciplines at universities across the United States. Data were collected from 28 participants at 18 different universities. The interviews consisted of questions designed to elicit information related to the components of mental models (state, form, function, and purpose) of assessments of students in their classrooms. For this paper, we analyzed the interviews to identify entities and entity relationships in participant statements, using GPT-4 as our language model. Using instructional prompts, we asked GPT-4 to extract entities and their relationships from interview excerpts, and we then created graphical representations with GraphViz to characterize and compare individuals’ mental models of assessment. We compared GPT-4’s results on a small portion of our data to entities and relationships extracted manually by one of our researchers. We found that the two methods identified overlapping entity relationships, but each also discovered entities and relationships the other did not. The GPT-4 model tended to identify more basic relationships, while manual analysis identified more nuanced relationships. Our results do not currently support using GPT-4 to automatically generate graphical representations of faculty’s mental models of assessment; however, a human-in-the-loop process could help offset GPT-4’s limitations. In this paper, we also discuss plans for future work to improve upon GPT-4’s current performance.
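The second half of the pipeline described above, rendering extracted entity relationships as a graph, can be sketched in a few lines. The following is a minimal illustration, not the study's actual code: it assumes the GPT-4 extraction step has already produced (entity, relation, entity) triples, and the function name and example triples are hypothetical.

```python
def triples_to_dot(triples, graph_name="mental_model"):
    """Render (source, relation, target) triples as GraphViz DOT source.

    The DOT string can be fed to the `dot` CLI or the Python `graphviz`
    package to produce a diagram of one participant's mental model.
    """
    lines = [f"digraph {graph_name} {{"]
    # Declare each unique entity as a node.
    nodes = sorted({name for src, _, dst in triples for name in (src, dst)})
    for node in nodes:
        lines.append(f'    "{node}";')
    # One labeled directed edge per relationship.
    for src, rel, dst in triples:
        lines.append(f'    "{src}" -> "{dst}" [label="{rel}"];')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical triples, stand-ins for what a GPT-4 prompt might extract
# from an interview excerpt; not real study data.
example_triples = [
    ("exam", "measures", "student learning"),
    ("rubric", "guides", "grading"),
    ("feedback", "supports", "improvement"),
]

print(triples_to_dot(example_triples))
```

Keeping the extraction and rendering steps separate like this is what makes a human-in-the-loop workflow practical: a researcher can review and correct the triples before any graph is drawn.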
  3. The emergence of generative artificial intelligence (GAI) has prompted a fundamental reexamination of established teaching methods. GAI systems offer both educators and students a chance to reevaluate their academic practices. Such reevaluation is particularly pertinent to assessment in engineering instruction, where advanced generative text models can proficiently address intricate problems like those found in engineering courses. While this juncture presents an opportunity to revisit assessment methods in general, how faculty are actually responding to the incorporation of GAI into their evaluative techniques remains unclear. To investigate this, we initiated a study of the mental constructs that engineering faculty hold about evaluation, focusing on their evolving attitudes and responses to GAI as reported in the Fall of 2023. Adopting a long-term data-gathering strategy, we conducted a series of surveys, interviews, and recordings targeting the evaluative decision-making processes of a varied group of engineering educators across the United States. This paper presents the data collection process, our participants’ demographics, our data analysis plan, and initial findings based on the participants’ backgrounds, followed by our future work and potential implications. In the next step of our study, we will analyze the collected data using qualitative thematic analysis. Once the study is complete, we believe our findings will sketch the early stages of this emerging paradigm shift in the assessment of undergraduate engineering education, offering a novel perspective on the discourse surrounding evaluation strategies in the field. These insights are vital for stakeholders such as policymakers, educational leaders, and instructors, as they have significant ramifications for policy development, curriculum planning, and the broader dialogue on integrating GAI into educational evaluation.